Risk Management Framework
A Frontier AI Risk Management Framework: Bridging the Gap Between Current AI Practices and Established Risk Management
Campos, Simeon, Papadatos, Henry, Roger, Fabien, Touzet, Chloé, Murray, Malcolm, Quarks, Otter
The recent development of powerful AI systems has highlighted the need for robust risk management frameworks in the AI industry. Although companies have begun to implement safety frameworks, current approaches often lack the systematic rigor found in other high-risk industries. This paper presents a comprehensive risk management framework for the development of frontier AI that bridges this gap by integrating established risk management principles with emerging AI-specific practices. The framework consists of four key components: (1) risk identification (through literature review, open-ended red-teaming, and risk modeling), (2) risk analysis and evaluation using quantitative metrics and clearly defined thresholds, (3) risk treatment through mitigation measures such as containment, deployment controls, and assurance processes, and (4) risk governance establishing clear organizational structures and accountability. Drawing on best practices from mature industries such as aviation and nuclear power, while accounting for AI's unique challenges, the framework provides AI developers with actionable guidelines for implementing robust risk management. The paper details how each component should be implemented throughout the AI system's lifecycle, from planning through deployment, and emphasizes the importance and feasibility of conducting risk management work before the final training run to minimize the associated burden.
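As a rough illustration of component (2), the thresholded evaluation step the abstract describes can be sketched in a few lines of code. This is a minimal sketch, not the paper's method: the scenario name, probability, severity, and threshold values are hypothetical placeholders, and the probability-times-severity score is just one simple choice of quantitative metric.

```python
from dataclasses import dataclass

@dataclass
class RiskEstimate:
    scenario: str       # a risk scenario surfaced during risk identification
    probability: float  # estimated probability of occurrence
    severity: float     # estimated harm if the scenario occurs

def evaluate(risk: RiskEstimate, threshold: float) -> str:
    """Compare a quantitative risk metric against a predefined threshold."""
    score = risk.probability * risk.severity  # simple expected-harm metric
    if score > threshold:
        return "treat"   # escalate to risk treatment (containment, controls)
    return "accept"      # tolerable under the stated risk threshold

# Hypothetical scenario and threshold: 0.02 * 9.0 = 0.18 > 0.1, so "treat".
print(evaluate(RiskEstimate("model weight exfiltration", 0.02, 9.0), 0.1))
```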
- Asia > South Korea > Seoul > Seoul (0.05)
- North America > United States > New York (0.04)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- Information Technology > Security & Privacy (1.00)
- Government > Regional Government > North America Government > United States Government (0.68)
- Energy > Power Industry > Utilities > Nuclear (0.67)
Who Followed the Blueprint? Analyzing the Responses of U.S. Federal Agencies to the Blueprint for an AI Bill of Rights
Lage, Darren, Pruitt, Riley, Arnold, Jason Ross
This study examines the extent to which U.S. federal agencies responded to and implemented the principles outlined in the White House's October 2022 "Blueprint for an AI Bill of Rights." The Blueprint provided a framework for the ethical governance of artificial intelligence systems, organized around five core principles: safety and effectiveness, protection against algorithmic discrimination, data privacy, notice and explanation about AI systems, and human alternatives and fallback. Through an analysis of publicly available records across 15 federal departments, the authors found limited evidence that the Blueprint directly influenced agency actions after its release. Only five departments explicitly mentioned the Blueprint, while 12 took steps aligned with one or more of its principles. However, much of this work appeared to have precedents predating the Blueprint or motivations disconnected from it, such as compliance with prior executive orders on trustworthy AI. Departments' activities often emphasized priorities like safety, accountability and transparency that overlapped with Blueprint principles, but did not necessarily stem from it. The authors conclude that the non-binding Blueprint seems to have had minimal impact on shaping the U.S. government's approach to ethical AI governance in its first year. Factors like public concerns after high-profile AI releases and obligations to follow direct executive orders likely carried more influence over federal agencies. More rigorous study would be needed to definitively assess the Blueprint's effects within the federal bureaucracy and broader society.
- North America > United States > Virginia (0.05)
- North America > United States > New York (0.04)
- Research Report > New Finding (0.68)
- Research Report > Experimental Study (0.48)
The Digital Insider
The U.S. Department of Commerce's National Institute of Standards and Technology (NIST) has released its Artificial Intelligence Risk Management Framework (AI RMF 1.0), a guidance document for voluntary use by organizations designing, developing, deploying or using AI systems to help manage the many risks of AI technologies. The AI RMF follows a direction from Congress for NIST to develop the framework and was produced in close collaboration with the private and public sectors. It is intended to adapt to the AI landscape as technologies continue to develop, and to be used by organizations in varying degrees and capacities so that society can benefit from AI technologies while also being protected from their potential harms. "This voluntary framework will help develop and deploy AI technologies in ways that enable the United States, other nations and organizations to enhance AI trustworthiness while managing risks based on our democratic values," said Deputy Commerce Secretary Don Graves. "It should accelerate AI innovation and growth while advancing -- rather than restricting or damaging -- civil rights, civil liberties and equity for all."
How To Create a Practical A.I. Risk Management Framework for Your Company
Originally published on Towards AI. Simple steps to mitigate A.I. risks in a structured way.
Can NIST move 'trustworthy AI' forward with new draft of AI risk management framework?
Is your AI trustworthy or not? As the adoption of AI solutions increases across the board, consumers and regulators alike expect greater transparency over how these systems work. Today's organizations not only need to be able to identify how AI systems process data and make decisions to ensure they are ethical and bias-free, but they also need to measure the level of risk posed by these solutions.
Artificial Intelligence: NIST Risk Management Framework and Guidance Addressing Bias in AI
As more and more companies develop or use artificial intelligence (AI), it is important to consider risk management and best practices to address issues like bias in AI. The National Institute of Standards and Technology (NIST) recently released a draft of its AI Risk Management Framework (Framework) and guidance to address bias in AI (Guidance). The voluntary Framework addresses risks in the design, development, use, and evaluation of AI systems. The Guidance offers considerations for the trustworthy and responsible development and use of AI, notably including suggested governance processes to address bias. The Framework and Guidance will be useful for those who design, develop, use, or evaluate AI technologies.
- North America > United States > Virginia (0.05)
- North America > United States > Colorado (0.05)
- North America > United States > California (0.05)
Why 2022 is only the beginning for AI regulation
As the world becomes increasingly dependent on technology to communicate, attend school, do our work, buy groceries and more, artificial intelligence (AI) and machine learning (ML) play a bigger role in our lives. Living through the second year of the COVID-19 pandemic has shown the value of technology and AI. It has also revealed a dangerous side, and regulators have responded accordingly. In 2021, governing bodies across the world worked to regulate how AI and ML systems are used.
- North America > United States > New York (0.05)
- North America > United States > Colorado (0.05)
- North America > Canada (0.05)
FLI October 2021 Newsletter - Future of Life Institute
FLI engages on the AI Risk Management Framework in the US
FLI continues to advise the National Institute of Standards and Technology (NIST) in its development of guidance on artificial intelligence, including the critically important AI Risk Management Framework. Our latest comments from this month on the Risk Management Framework raised numerous policy issues, including the need for NIST to account for aggregate risks from the low-probability, high-consequence effects of AI systems, and the need to proactively ensure the alignment of ever more powerful advanced or general AI systems.
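To make the "aggregate risks" point concrete, the sketch below sums expected losses across several individually low-probability, high-consequence failure modes. The failure modes, probabilities, and dollar figures are invented for illustration and do not come from FLI's comments.

```python
# Invented figures: (failure mode, annual probability, consequence in $).
failure_modes = [
    ("critical-infrastructure misuse", 0.001, 1e9),
    ("large-scale disinformation",     0.005, 2e8),
    ("autonomous-system accident",     0.002, 5e8),
]

# Each mode alone looks negligible, but the expected losses add up.
expected_losses = {name: p * c for name, p, c in failure_modes}
for name, loss in expected_losses.items():
    print(f"{name}: expected loss ${loss:,.0f}")  # $1,000,000 each here

print(f"aggregate expected loss: ${sum(expected_losses.values()):,.0f}")
```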
Risk Management Framework for Machine Learning Security
Breier, Jakub, Baldwin, Adrian, Balinsky, Helen, Liu, Yang
Adversarial attacks on machine learning models have become a highly studied topic in both academia and industry. These attacks, along with traditional security threats, can compromise the confidentiality, integrity, and availability of an organization's assets that depend on the use of machine learning models. While it is not easy to predict the types of new attacks that might be developed over time, it is possible to evaluate the risks connected to using machine learning models and design measures that help minimize these risks. In this paper, we outline a novel framework to guide the risk management process for organizations reliant on machine learning models. First, we define sets of evaluation factors (EFs) in the data domain, model domain, and security controls domain. We then develop a method that takes asset and task importance, sets the weights of each EF's contribution to confidentiality, integrity, and availability, and, based on the EFs' implementation scores, determines the overall security state of the organization. From this information, it is possible to identify weak links in the implemented security measures and find out which measures might be missing entirely. We believe our framework can help address the security issues related to the use of machine learning models in organizations and guide them in focusing on adequate security measures to protect their assets.
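The weighted-scoring idea in this abstract can be sketched as follows. The paper defines its own EFs and aggregation method, so the EF names, weights, implementation scores, and the min-based overall score below are hypothetical placeholders rather than the authors' actual method.

```python
# Implementation score of each EF in [0, 1] (how fully it is deployed).
ef_scores = {
    "data_sanitization": 0.8,
    "model_access_control": 0.5,
    "adversarial_testing": 0.3,
}

# Weight of each EF's contribution to confidentiality (C), integrity (I),
# and availability (A).
ef_weights = {
    "data_sanitization":    {"C": 0.5, "I": 0.3, "A": 0.2},
    "model_access_control": {"C": 0.7, "I": 0.2, "A": 0.1},
    "adversarial_testing":  {"C": 0.1, "I": 0.6, "A": 0.3},
}

asset_importance = 0.9  # importance of the ML asset/task, in [0, 1]

def property_score(prop: str) -> float:
    """Weighted average of EF implementation scores for one CIA property."""
    weighted = sum(ef_scores[ef] * w[prop] for ef, w in ef_weights.items())
    return weighted / sum(w[prop] for w in ef_weights.values())

for prop in ("C", "I", "A"):
    print(prop, round(property_score(prop), 2))

# Scale the weakest property by asset importance to flag the overall state;
# low-scoring, high-weight EFs (adversarial_testing for I) are the weak links.
overall = asset_importance * min(property_score(p) for p in ("C", "I", "A"))
print("overall security state:", round(overall, 2))
```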
Enterprises jump on the AI bandwagon but seat belts are few
Artificial intelligence (AI) is swiftly moving into the mainstream and emerging as a powerful engine for many organizations, prompting them to jump on the AI bandwagon to accelerate growth, innovate, and disrupt the market. The Indian government and industry bodies are focusing extensively on building an AI ecosystem that could help the country develop and implement cutting-edge solutions (see: New CII forum formed to help build an AI ecosystem). However, according to a recent study, Indian enterprises need to beef up their risk-management capabilities to leverage AI's potential and dodge threats that may emerge after scaling up AI deployments. The study, titled "Can enterprise intelligence be created artificially?" and commissioned by global consulting major EY and the trade association Nasscom, says that 60% of Indian executive leaders believe AI will disrupt their businesses within three years. Yet only 25% of enterprises have deployed AI solutions.
- Information Technology > Security & Privacy (0.99)
- Government (0.72)